User:Rolandtb/LLM assisted coding for Support related tasks

From MozillaWiki

2025-12-10 Use bugbug, the Mozilla MCP server, to integrate with Bugzilla and other Mozilla services?

  • Q: Where's the location of our MCP server? I want to add to it — marco, suhaib, ^
  • A: ah, .mcp.json
  • Q: and the source for that service?
  • A (suhaib): https://github.com/mozilla/bugbug/tree/master/mcp
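For context, MCP clients such as Claude Code read server definitions from a .mcp.json file using an mcpServers map. The sketch below is a hedged illustration of that schema only — the server name and URL are placeholders, not the actual bugbug endpoint:

```json
{
  "mcpServers": {
    "bugbug": {
      "type": "http",
      "url": "https://mcp.example.org/bugbug"
    }
  }
}
```

Stdio-based servers use "command" and "args" keys instead of "url"; check the bugbug repo's mcp directory for the real connection details.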

2025-12-09 Chris Dzombak: Streamlining my user-level CLAUDE.md

  • QUOTE: I’ve recently streamlined my ~/.claude/CLAUDE.md based on:
    • Informal observations about what Claude does and doesn’t do well
    • A desire to avoid conflicting with principles that seem to be built into Claude Code these days, e.g. planning mode
    • Informal observations about the workflows I tend to use with Claude Code
    • A desire to simplify it and therefore more closely align with e.g. Claude Code on the web
  • It now mainly contains guidelines that Claude should abide by while using its own planning and decision-making tools (or referring decisions to me).
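As a hedged illustration (not Dzombak's actual file), a streamlined user-level ~/.claude/CLAUDE.md along these lines might look like:

```markdown
# Guidelines

- For non-trivial changes, enter planning mode first and wait for my approval.
- Prefer small, reviewable diffs; do not regenerate whole files to address review comments.
- When a decision is ambiguous or high-stakes, ask me instead of guessing.
- Don't restate principles the tool already enforces (e.g. built-in planning behavior).
```

The file's contents, not just its length, matter: guidelines that conflict with the tool's built-in behavior can make it perform worse.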

2025-12-09 Mistral vibe: command-line coding assistant powered by Mistral's models

  • QUOTE: Mistral Vibe is a command-line coding assistant powered by Mistral's models. It provides a conversational interface to your codebase, allowing you to use natural language to explore, modify, and interact with your projects through a powerful set of tools. (Via GCP in Slack: https://github.com/mistralai/mistral-vibe, released together with a 24B model; should be fast on quite affordable GPUs and MacBooks.)

2025-12-08 LLMs are amazingly good at writing code de novo, according to Oxide Computer — but be careful, because they can spiral into nonsense :-) — and they are good for revising prose

QUOTE

LLMs are amazingly good at writing code — so much so that there is borderline mass hysteria about LLMs entirely eliminating software engineering as a craft. As with using an LLM to write prose, there is obvious peril here! Unlike prose, however (which really should be handed in a polished form to an LLM to maximize the LLM’s efficacy), LLMs can be quite effective writing code de novo. This is especially valuable for code that is experimental or auxiliary or otherwise throwaway. The closer code is to the system that we ship, the greater care needs to be shown when using LLMs. Even with something that seems natural for LLM contribution (e.g., writing tests), one should still be careful: it’s easy for LLMs to spiral into nonsense on even simple tasks. Still, they can be extraordinarily useful — and can help to provide an entire spectrum of utility in writing software; they shouldn’t be dismissed out of hand.

Wherever LLM-generated code is used, it becomes the responsibility of the engineer. As part of this process of taking responsibility, self-review becomes essential: LLM-generated code should not be reviewed by others if the responsible engineer has not themselves reviewed it. Moreover, once in the loop of peer review, generation should more or less be removed: if code review comments are addressed by wholesale re-generation, iterative review becomes impossible.

In short, where LLMs are used to generate code, responsibility, rigor, empathy and teamwork must remain top of mind.